Fooling the Eyes of Autonomous Vehicles: Robust Physical Adversarial Examples Against Traffic Sign Recognition Systems
Jia, Wei, Lu, Zhaojun, Zhang, Haichun, Liu, Zhenglin, Wang, Jie, Qu, Gang
Adversarial Examples (AEs) can deceive Deep Neural Networks (DNNs) and have received a lot of attention recently. However, the majority of the research on AEs is in the digital domain, and the adversarial patches are static, which is very different from many real-world DNN applications such as Traffic Sign Recognition (TSR) systems in autonomous vehicles. In TSR systems, object detectors use DNNs to process streaming video in real time. From the view of object detectors, the traffic sign's position and the quality of the video are continuously changing, rendering digital AEs ineffective in the physical world. In this paper, we propose a systematic pipeline to generate robust physical AEs against real-world object detectors. Robustness is achieved in three ways. First, we simulate the in-vehicle cameras by extending the distribution of image transformations with the blur transformation and the resolution transformation. Second, we design single and multiple bounding box filters to improve the efficiency of perturbation training. Third, we consider four representative attack vectors, namely Hiding Attack (HA), Appearance Attack (AA), Non-Target Attack (NTA), and Target Attack (TA). We perform a comprehensive set of experiments under a variety of environmental conditions, including illumination in sunny and cloudy weather as well as at night. The experimental results show that the physical AEs generated by our pipeline are effective and robust when attacking the YOLO v5 based TSR system. The attacks have good transferability and can deceive other state-of-the-art object detectors. We launched HA and NTA on a brand-new 2021 model vehicle. Both attacks successfully fooled the TSR system, which could be life-threatening for autonomous vehicles. Finally, we discuss three defense mechanisms based on image preprocessing, AE detection, and model enhancement.
- Asia > China > Guangdong Province > Shenzhen (0.04)
- North America > United States > Maryland (0.04)
- Information Technology > Security & Privacy (1.00)
- Transportation > Ground > Road (0.46)
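The abstract above describes simulating in-vehicle cameras by extending the distribution of image transformations with blur and resolution transforms. A minimal sketch of that idea, assuming grayscale images as float NumPy arrays in [0, 1]; the box blur and stride-based resampling here are simplified stand-ins for the paper's actual transforms:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple box blur: average each pixel over a k x k neighborhood."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def lower_resolution(img, factor=2):
    """Mimic a low-resolution frame: downsample by striding, then
    upsample by pixel repetition back to the original size."""
    small = img[::factor, ::factor]
    up = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
    return up[:img.shape[0], :img.shape[1]]

def sample_transform(img, rng):
    """Draw one camera-like transformation from the (simplified)
    distribution; during perturbation training each step would
    optimize against samples like this."""
    if rng.random() < 0.5:
        return box_blur(img, k=int(rng.choice([3, 5])))
    return lower_resolution(img, factor=int(rng.choice([2, 4])))
```

Averaging the attack loss over many such sampled transforms is what pushes the perturbation toward working under changing distance and focus, rather than for one fixed digital view.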
Royal Caribbean Now Sets Your Vacation Photos to Music Using AI
Everyone loves posting a good vacation photo to Instagram--but what if each one could have its own unique soundtrack, too? Royal Caribbean is experimenting with that possibility, on Tuesday launching an online tool that turns user images into kaleidoscopic mini-videos, complete with original music inspired by the visuals -- and assembled by artificial intelligence (AI). A picture from a botanical garden, of red flowers and green leaves, generates two bars of smooth jazz. In a quick snap of the Bensonhurst Statue House, the cruise line's technology recognizes a dour likeness of a face peeking over a fence, and delivers a funky nu-disco snippet, with pumping guitars and horn swells. Titled SoundSeeker, the marketer's new website chops and swirls the three photos into one spinning abstract visual sequence -- weaving in ample shots of crystal blue waters -- and strings together the music, for a total of six bars at about 100 beats per minute.
- Information Technology > Communications > Social Media (0.58)
- Information Technology > Artificial Intelligence > Applied AI (0.40)
AI Bias: When Algorithms Go Bad
Earlier this month researchers from the Massachusetts Institute of Technology and Stanford University reported that they had found that three commercial facial-analysis programs from major tech companies showed bias in both skin type and gender. The error rates for determining the gender of light-skinned men were 0.8%, compared with much higher error rates for darker-skinned women, which in some cases were as high as 20% and 34%. This is not the first time an algorithm powering an AI application has delivered an erroneous, to say nothing of embarrassing, result. In 2015, Flickr, a photo-sharing site owned by Yahoo, launched image-recognition software that automatically created tags for photos. Some of the tags being created were highly offensive, such as "sport" and "jungle gym" for pictures of concentration camps and "ape" for pictures of humans including an African American man.
Visualizing Convolutional Neural Networks with Open-source Picasso
While it's easier than ever to define and train deep neural networks (DNNs), understanding the learning process remains somewhat opaque. Monitoring the loss or classification error during training won't always prevent your model from learning the wrong thing or learning a proxy for your intended classification task. Once upon a time, the US Army wanted to use neural networks to automatically detect camouflaged enemy tanks. Wisely, the researchers had originally taken 200 photos: 100 of tanks and 100 of trees. They had used only 50 of each for the training set.
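The point above is that loss curves alone won't reveal when a model has learned a proxy for the intended task. One simple, framework-free probe of what a classifier actually relies on is occlusion sensitivity, sketched here with a hypothetical `model` callable standing in for a real network; this illustrates the general idea, not the Picasso tool itself:

```python
import numpy as np

def occlusion_map(model, img, patch=4, fill=0.0):
    """Slide a blank patch over the image and record how much the
    model's score drops at each position. Large drops mark regions
    the model depends on; if they sit on the background rather than
    the object, you have likely trained a proxy detector."""
    h, w = img.shape
    base = model(img)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = img.copy()
            occluded[i:i + patch, j:j + patch] = fill
            heat[i // patch, j // patch] = base - model(occluded)
    return heat

# Toy "model" for illustration: scores an image purely by the
# brightness of its top-left corner.
toy_model = lambda x: float(x[:4, :4].mean())
img = np.zeros((8, 8))
img[:4, :4] = 1.0
heat = occlusion_map(toy_model, img)
```

In the toy run, the score drop is concentrated entirely in the top-left cell of the heat map, exposing exactly which region the toy model attends to.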
Technology Requirements for Deep Machine Learning
Understanding key technology requirements will help technologists, management, and data scientists tasked with realizing the benefits of machine learning make intelligent decisions in their choice of hardware platforms. Deep learning is a technical term that describes a particular configuration of an artificial neural network (ANN) architecture that has many 'hidden' or computational layers between the input neurons, where data is presented for training or inference, and the output neuron layer, where the numerical results of the neural network can be read. Each step in the training process simply applies a candidate set of model parameters (as determined by a black-box optimization algorithm) to inference all the examples in the training data. The reason is that numerical optimization requires repeated iterations of candidate parameter sets while the training process converges to a solution.
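The training loop described above, where a black-box optimizer repeatedly proposes candidate parameter sets and each candidate is evaluated against every training example, can be sketched with random search over a one-parameter toy model. Both the optimizer and the model here are simplified stand-ins chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 1, 50)
y = 3.0 * x  # training data generated by the true parameter w = 3

def loss(w):
    """Apply one candidate parameter to ALL training examples, as the
    text describes, and measure the mean squared error."""
    return float(np.mean((w * x - y) ** 2))

# Black-box optimization by random search: repeated iterations of
# candidate parameter sets, keeping whichever scores best so far.
best_w, best_loss = 0.0, loss(0.0)
for _ in range(200):
    candidate = best_w + rng.normal(0, 0.5)
    candidate_loss = loss(candidate)
    if candidate_loss < best_loss:
        best_w, best_loss = candidate, candidate_loss

# best_w typically ends up close to the true value 3.0
```

Every iteration sweeps the full training set, which is why the text ties hardware choice to this workload: the cost of one candidate evaluation scales with the data, and convergence requires many such evaluations.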
After reading thousands of romance books, Google's AI is writing eerie post-modern poetry
When I say "the words have no meaning to the computer," what I'm getting at is that the words hold no value from the perspective of the machine. If you tell me it's a sunny day where you are, then my brain links that string of words to my own personal experience. I can link a sense of temperature on my skin to what I know as a "sunny day." The machine (aside from pre-programmed info) has no frame of reference like you and I do. I know enough about computer programming to be confident that the machine is no more knowledgeable about a sunny day than about an apple cart.